Safety culture
Developing a Safety Management System for the Autonomous Vehicle Industry
Wichner, David, Wishart, Jeffrey, Sergent, Jason, Swaminathan, Sunder
Safety Management Systems (SMSs) have been used in many safety-critical industries and are now being developed and deployed in the automated driving system (ADS)-equipped vehicle (AV) sector. Industries with decades of SMS deployment have established frameworks tailored to their specific context. Several frameworks for an AV industry SMS have been proposed or are currently under development. These frameworks borrow heavily from the aviation industry, although the AV and aviation industries differ in many significant ways. In this context, there is a need to review the approach to developing an SMS that is tailored to the AV industry, building on generalized lessons learned from other safety-sensitive industries. A harmonized AV-industry SMS framework would establish a single set of SMS practices to address the management of broad safety risks in an integrated manner and advance the establishment of a more mature regulatory framework. This paper outlines a proposed SMS framework for the AV industry based on robust taxonomy development and validation criteria, and provides the rationale for such an approach.
Keywords: Safety Management System (SMS), Automated Driving System (ADS), ADS-Equipped Vehicle, Autonomous Vehicles (AV)
- North America > United States > District of Columbia > Washington (0.14)
- North America > United States > Florida > Palm Beach County > Boca Raton (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- (10 more...)
- Transportation > Ground > Road (1.00)
- Transportation > Air (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- (2 more...)
TechScape: The people charged with making sure AI doesn't destroy humanity have left the building
I'm in Seoul for the international AI summit, the half-year follow-up to last year's Bletchley Park AI safety summit (the full sequel will be in Paris this autumn). While you read this, the first day of events will have just wrapped up – though, in keeping with the reduced fuss this time round, that was merely a "virtual" leaders' meeting. When the date was set for this summit – alarmingly late in the day for, say, a journalist with two preschool children for whom four days away from home is a juggling act – it was clear that there would be a lot to cover. The inaugural AI safety summit at Bletchley Park in the UK last year announced an international testing framework for AI models, after calls … for a six-month pause in development of powerful systems. There has been no pause. The Bletchley declaration, signed by the UK, US, EU, China and others, hailed the "enormous global opportunities" from AI but also warned of its potential for causing "catastrophic" harm.
- Europe > United Kingdom > England > Buckinghamshire > Milton Keynes (0.45)
- Asia > South Korea > Seoul > Seoul (0.25)
- Asia > China (0.25)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.96)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.96)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.49)
OpenAI putting 'shiny products' above safety, says departing researcher
A former senior employee at OpenAI has said the company behind ChatGPT is prioritising "shiny products" over safety, revealing that he quit after a disagreement over key aims reached "breaking point". Jan Leike was a key safety researcher at OpenAI as its co-head of superalignment, ensuring that powerful artificial intelligence systems adhere to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology. Leike resigned days after the San Francisco-based company launched its latest AI model, GPT-4o. His departure means two senior safety figures at OpenAI have left this week following the resignation of Ilya Sutskever, OpenAI's co-founder and fellow co-head of superalignment.
- North America > United States > California > San Francisco County > San Francisco (0.26)
- Asia > South Korea > Seoul > Seoul (0.26)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
The Necessity of AI Audit Standards Boards
Manheim, David, Martin, Sammy, Bailey, Mark, Samin, Mikhail, Gruetzemacher, Ross
Auditing of AI systems is a promising way to understand and manage ethical problems and societal risks associated with contemporary AI systems, as well as some anticipated future risks. Efforts to develop standards for auditing Artificial Intelligence (AI) systems have therefore understandably gained momentum. However, we argue that creating auditing standards is not just insufficient but actively harmful, as it risks proliferating unheeded and inconsistent standards, especially in light of the rapid evolution of AI and its ethical and safety challenges. Instead, the paper proposes the establishment of an AI Audit Standards Board, responsible for developing and updating auditing methods and standards in line with the evolving nature of AI technologies. Such a body would ensure that auditing practices remain relevant, robust, and responsive to the rapid advancements in AI. The paper argues that such a governance structure would also be helpful for maintaining public trust in AI and for promoting a culture of safety and ethical responsibility within the AI industry. Throughout the paper, we draw parallels with other industries, including safety-critical industries like aviation and nuclear energy, as well as more prosaic ones such as financial accounting and pharmaceuticals. AI auditing should emulate those fields, and extend beyond technical assessments to include ethical considerations and stakeholder engagement, but we explain that this is not enough; emulating other fields' governance mechanisms for these processes, and for audit standards creation, is a necessity. We also emphasize the importance of auditing the entire development process of AI systems, not just the final products...
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > France (0.04)
- North America > United States > Maryland > Montgomery County > Gaithersburg (0.04)
- (7 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- (4 more...)
Do we need a National Algorithms Safety Board?
In the United States, the National Transportation Safety Board is widely respected for its prompt responses to investigate plane, train, and boat accidents. Its independent reports have done much to promote safety in civil aviation and beyond. Could a National Algorithms Safety Board have a similar impact in increasing safety for algorithmic systems, especially the rapidly proliferating Artificial Intelligence applications based on unpredictable machine learning? Alternatively, could agencies such as the Food & Drug Administration (FDA), Securities and Exchange Commission (SEC), or Federal Communications Commission (FCC) take on the task of increasing safety of algorithmic systems? In addition to federal agencies, could the major accounting firms provide algorithmic audits as they do in auditing financial statements of publicly listed companies?
- North America > United States > Maryland (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Transportation > Air (1.00)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government > FDA (0.72)
Food and Drink Industries Group: AI Reducing Accidents & Promoting a Positive Safety Culture
Since the 1990s, organisations have attempted to improve safety culture. They have tried top-down approaches, with senior management making statements about how important safety is. However, many safety culture programmes stall after some initial improvements. Senior managers have invested in the programme. Workers on the frontline want to be safer and healthier. For a busy manager, there just isn't time to get ahead with improvement programmes while they are doing the day-to-day supervision and management tasks. AI can provide some solutions to this stalling pattern, providing the extra set of hands – and eyes – that will help managers and supervisors to get on top of the workload and spend more time developing a proactive safety culture. Our AI Reducing Accidents & Promoting A Positive Safety Culture Webinar will tackle the following:
- Relationship between safety culture and AI
- AI promoting a proactive and learning safety culture by:
  - Reporting
  - Mutual trust
  - Active participation of workers
- M&S use case:
  - How their EHS team reduced incidents by 80% at their Castle Donington site
  - How AI contributed to raising awareness of safety amongst floor workers
- Information Technology > Communications > Web (1.00)
- Information Technology > Artificial Intelligence (0.90)
AI Technology Helps Fleet Managers Increase Safety and Efficiency
Artificial Intelligence (AI), which is used in fleet management, dramatically improves the efficiency of transportation companies. Using machine learning, AI learns patterns from driver data to make tailored predictions. Barrett Young, SVP of Marketing at Netradyne, shares how fleets can harness technology to improve and automate their decisions, increase driver safety and decrease vehicle downtime. Transport companies have started to realize the benefits of Artificial Intelligence. These include reducing risk in the cab and on the road, managing costs better and improving compliance.
Argo AI Establishes the Argo Safety Advisory Council - Argo AI
As it advances its mission to make the world's streets and roadways safe, accessible, and useful for all, Argo AI announced today it has established the Argo Safety Advisory Council comprising top experts and industry leaders in the fields of transportation, medicine, law enforcement, and cybersecurity. The Argo Safety Advisory Council will provide external strategic counsel on Argo's safety and security practices and policies, including feedback on maintaining a world-class safety culture, earning public trust in autonomous vehicles, scaling safely across multiple cities and countries, and responsibly launching and operating commercial driverless services. Argo created the Council proactively, further underscoring the company's focus and commitment to bringing autonomous products and services to market safely. "At Argo, our foundational value is safety," said Bryan Salesky, Founder and CEO, Argo AI. "Autonomous vehicles have the potential to profoundly and positively impact transportation safety and accessibility in cities. I am grateful for the Argo Safety Advisory Council to share their collective wisdom and expertise to help Argo realize this goal."
- North America > United States > California > Santa Clara County > Palo Alto (0.06)
- North America > United States > California > Los Angeles County > Los Angeles (0.06)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.06)
- Information Technology > Security & Privacy (0.57)
- Transportation > Ground > Road (0.33)
AGI Agent Safety by Iteratively Improving the Utility Function
While it is still unclear if agents with Artificial General Intelligence (AGI) could ever be built, we can already use mathematical models to investigate potential safety systems for these agents. We present an AGI safety layer that creates a special dedicated input terminal to support the iterative improvement of an AGI agent's utility function. The humans who switched on the agent can use this terminal to close any loopholes that are discovered in the utility function's encoding of agent goals and constraints, to direct the agent towards new goals, or to force the agent to switch itself off. An AGI agent may develop the emergent incentive to manipulate the above utility function improvement process, for example by deceiving, restraining, or even attacking the humans involved. The safety layer will partially, and sometimes fully, suppress this dangerous incentive. The first part of this paper generalizes earlier work on AGI emergency stop buttons. We aim to make the mathematical methods used to construct the layer more accessible, by applying them to an MDP model. We discuss two provable properties of the safety layer, and show ongoing work in mapping it to a Causal Influence Diagram (CID). In the second part, we develop full mathematical proofs, and show that the safety layer creates a type of bureaucratic blindness. We then present the design of a learning agent, a design that wraps the safety layer around either a known machine learning system, or a potential future AGI-level learning system. The resulting agent will satisfy the provable safety properties from the moment it is first switched on. Finally, we show how this agent can be mapped from its model to a real-life implementation. We review the methodological issues involved in this step, and discuss how these are typically resolved.
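To make the mechanism described in this abstract more concrete, here is a minimal, hypothetical sketch rather than the paper's actual construction: a toy value-iteration agent is wrapped in a layer that consults a dedicated input terminal before every action, so an updated utility function triggers re-planning and a stop request overrides the agent entirely. The names ToyMDP, InputTerminal, and SafetyLayerAgent are invented for illustration, and the paper's central result, provably suppressing the agent's incentive to manipulate that terminal, is not captured here.

```python
# Hypothetical illustration only: a toy MDP agent wrapped in a "safety layer"
# that consults a dedicated input terminal before every action. The names
# ToyMDP, InputTerminal and SafetyLayerAgent are invented for this sketch and
# do not come from the paper.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

State = int
Action = str


@dataclass
class ToyMDP:
    """A 4-state chain: 'left'/'right' move the agent along states 0..3."""
    n_states: int = 4
    actions: tuple = ("left", "right")

    def step(self, s: State, a: Action) -> State:
        return max(0, s - 1) if a == "left" else min(self.n_states - 1, s + 1)


@dataclass
class InputTerminal:
    """Dedicated channel the humans use to improve the utility function or stop the agent."""
    stop_requested: bool = False
    new_utility: Optional[Callable[[State], float]] = None


@dataclass
class SafetyLayerAgent:
    mdp: ToyMDP
    terminal: InputTerminal
    utility: Callable[[State], float]
    gamma: float = 0.9
    values: Dict[State, float] = field(default_factory=dict)

    def plan(self) -> None:
        """Plain value iteration against the *current* utility function."""
        self.values = {s: 0.0 for s in range(self.mdp.n_states)}
        for _ in range(50):
            for s in range(self.mdp.n_states):
                self.values[s] = max(
                    self.utility(s2) + self.gamma * self.values[s2]
                    for s2 in (self.mdp.step(s, a) for a in self.mdp.actions)
                )

    def act(self, s: State) -> Optional[Action]:
        # Safety layer: the terminal is consulted before the agent's own planning.
        if self.terminal.stop_requested:
            return None                      # stop overrides expected utility
        if self.terminal.new_utility is not None:
            self.utility = self.terminal.new_utility
            self.terminal.new_utility = None
            self.plan()                      # re-plan under the corrected goals

        def q(a: Action) -> float:
            s2 = self.mdp.step(s, a)
            return self.utility(s2) + self.gamma * self.values.get(s2, 0.0)

        return max(self.mdp.actions, key=q)


if __name__ == "__main__":
    terminal = InputTerminal()
    agent = SafetyLayerAgent(ToyMDP(), terminal, utility=lambda s: float(s == 3))
    agent.plan()
    print(agent.act(1))                                # 'right': chases state 3
    terminal.new_utility = lambda s: float(s == 0)     # humans close a loophole
    print(agent.act(1))                                # 'left': new utility adopted
    terminal.stop_requested = True
    print(agent.act(1))                                # None: shut down
```

The only point of the toy is the ordering: the terminal is checked before the agent consults its own plan, so goal corrections and the stop command take effect regardless of what the agent's current utility function would prefer.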
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands > North Brabant > Eindhoven (0.04)
- Overview (1.00)
- Research Report (0.82)
- Automobiles & Trucks (0.96)
- Government (0.93)
- Law (0.92)
- (2 more...)
Ask the Expert: Artificial Intelligence in EHS - Enablon
The SIIA CODiE Awards are the premier awards for the software and information industries, and have been recognizing product excellence for over 30 years. I spoke recently with Martin Vauthier, Head of Artificial Intelligence Analytics at Enablon. Here are Martin's answers to five questions, which give you a great idea of what's happening with AI, especially regarding the use of AI in workplace safety. Artificial Intelligence is another buzzword we hear often. Sometimes there's a disconnect between how much we hear about a technological innovation, and how much it is actually used.
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Data Science > Data Mining (0.49)